Towards Domain Generalization for ECG and EEG Classification: Algorithms and Benchmarks
Despite their immense success in numerous fields, machine and deep learning
systems have not yet been able to firmly establish themselves in
mission-critical applications in healthcare. One of the main reasons lies in
the fact that when models are presented with previously unseen,
Out-of-Distribution samples, their performance deteriorates significantly. This
is known as the Domain Generalization (DG) problem. Our objective in this work
is to propose a benchmark for evaluating DG algorithms, in addition to
introducing a novel architecture for tackling DG in biosignal classification.
In this paper, we describe the Domain Generalization problem for biosignals,
focusing on electrocardiograms (ECG) and electroencephalograms (EEG) and
propose and implement an open-source biosignal DG evaluation benchmark.
Furthermore, we adapt state-of-the-art DG algorithms from computer vision to
the problem of 1D biosignal classification and evaluate their effectiveness.
Finally, we also introduce a novel neural network architecture that leverages
multi-layer representations for improved model generalizability. By
implementing the above DG setup we are able to experimentally demonstrate the
presence of the DG problem in ECG and EEG datasets. In addition, our proposed
model demonstrates improved effectiveness compared to the baseline algorithms,
exceeding the state of the art on both datasets. Recognizing the significance
of the distribution shift present in biosignal datasets, the presented
benchmark aims at urging further research into the field of biomedical DG by
simplifying the evaluation process of proposed algorithms. To our knowledge,
this is the first attempt at developing an open-source framework for evaluating
ECG and EEG DG algorithms.
Comment: Accepted in IEEE Transactions on Emerging Topics in Computational Intelligence.
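The abstract does not spell out the evaluation protocol; a common way to benchmark DG algorithms is leave-one-domain-out evaluation, where each domain in turn serves as the unseen, out-of-distribution test set. The sketch below illustrates only this protocol; the toy domains and the mean-label "model" are placeholders, not the paper's architecture or datasets:

```python
import numpy as np

def leave_one_domain_out(domains, train_fn, eval_fn):
    """Evaluate a DG algorithm: hold out each domain as the OOD test set."""
    scores = {}
    for held_out in domains:
        train = {d: data for d, data in domains.items() if d != held_out}
        model = train_fn(train)                               # fit on source domains only
        scores[held_out] = eval_fn(model, domains[held_out])  # test on the unseen domain
    return scores

# Toy example: "domains" are (X, y) pairs; the "model" is just the mean label.
rng = np.random.default_rng(0)
domains = {d: (rng.normal(size=(20, 4)), rng.integers(0, 2, 20)) for d in "ABC"}
train_fn = lambda train: np.concatenate([y for _, y in train.values()]).mean()
eval_fn = lambda model, test: float(np.mean((model > 0.5) == test[1]))
scores = leave_one_domain_out(domains, train_fn, eval_fn)
print(scores)  # one OOD accuracy per held-out domain
```

Averaging the per-domain scores gives a single benchmark number, while the spread across held-out domains exposes how severe the distribution shift is.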
DALE: Differential Accumulated Local Effects for efficient and accurate global explanations
Accumulated Local Effect (ALE) is a method for accurately estimating feature
effects, overcoming fundamental failure modes of previously existing methods,
such as Partial Dependence Plots. However, ALE's approximation, i.e., the method
for estimating ALE from the limited samples of the training set, faces two
weaknesses. First, it does not scale well in cases where the input has high
dimensionality, and, second, it is vulnerable to out-of-distribution (OOD)
sampling when the training set is relatively small. In this paper, we propose a
novel ALE approximation, called Differential Accumulated Local Effects (DALE),
which can be used in cases where the ML model is differentiable and an
auto-differentiable framework is accessible. Our proposal has significant
computational advantages, making feature effect estimation applicable to
high-dimensional Machine Learning scenarios with near-zero computational
overhead. Furthermore, DALE does not create artificial points for calculating
the feature effect, resolving misleading estimations due to OOD sampling.
Finally, we formally prove that, under some hypotheses, DALE is an unbiased
estimator of ALE and we present a method for quantifying the standard error of
the explanation. Experiments using both synthetic and real datasets demonstrate
the value of the proposed approach.
Comment: 16 pages, to be published in the Asian Conference on Machine Learning (ACML) 2022.
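As a rough illustration of the core idea, the sketch below estimates a feature effect by averaging, within each bin, gradients evaluated at the actual training samples, then accumulating across bins. No artificial points are created, which is the property the abstract attributes to DALE. The toy model, closed-form gradient, and bin count are assumptions for illustration; the paper's exact estimator may differ in detail:

```python
import numpy as np

def dale_1d(X, grad_j, j, n_bins=10):
    """Gradient-based accumulated-effect estimate for feature j.

    Instead of evaluating the model at artificial bin-edge points (classic
    ALE), average the gradient at the *actual* samples falling in each bin,
    then accumulate bin by bin.
    """
    x = X[:, j]
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    g = grad_j(X)                                  # gradients at training samples
    bin_mean = np.array([g[idx == b].mean() if np.any(idx == b) else 0.0
                         for b in range(n_bins)])
    widths = np.diff(edges)
    effect = np.concatenate([[0.0], np.cumsum(bin_mean * widths)])
    return edges, effect - effect.mean()           # centered, following ALE convention

# Toy model f(x) = x0**2 + x0*x1, so df/dx0 = 2*x0 + x1 (known in closed form
# here; in practice an auto-differentiation framework supplies the gradient).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
edges, effect = dale_1d(X, lambda X: 2 * X[:, 0] + X[:, 1], j=0, n_bins=20)
```

Because x1 is zero-mean here, the recovered effect of x0 approximates the centered x0**2 curve, and the per-sample gradients are computed once for all features, which is where the near-zero overhead in high dimensions comes from.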
Listen2YourHeart: A Self-Supervised Approach for Detecting Murmur in Heart-Beat Sounds
Heart murmurs are abnormal sounds present in heartbeats, caused by turbulent
blood flow through the heart. The PhysioNet 2022 challenge targets automatic
detection of murmur from audio recordings of the heart and automatic detection
of normal vs. abnormal clinical outcome. The recordings are captured from
multiple locations around the heart. Our participation investigates the
effectiveness of self-supervised learning for murmur detection. We train the
layers of a backbone CNN in a self-supervised way with data from both this
year's and the 2016 challenge. We use two different augmentations on each
training sample, and a normalized temperature-scaled cross-entropy loss. We
experiment with different augmentations to learn effective phonocardiogram
representations. To build the final detectors we train two classification
heads, one for each challenge task. We present evaluation results for all
combinations of the available augmentations, and for our multiple-augmentation
approach. Our team's (Listen2YourHeart) SSL murmur detection classifier
received a weighted accuracy score of 0.737 (ranked 13th out of 40 teams) and
an outcome identification challenge cost score of 11946 (ranked 7th out of 39
teams) on the hidden test set.
Comment: To be published in the proceedings of CinC 2022 (https://cinc.org/). This is a preprint version of the final paper.
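The normalized temperature-scaled cross-entropy mentioned above is the NT-Xent loss popularized by SimCLR-style contrastive learning: the two augmented views of a sample form a positive pair, and all other embeddings in the batch act as negatives. A minimal NumPy sketch follows; the batch size, embedding dimension, and temperature are illustrative, not the paper's configuration:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1[i] and z2[i] are embeddings of two augmentations of sample i; each
    such pair is positive, every other embedding in the batch is a negative.
    """
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarities
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

# Nearly identical views yield a lower loss than random pairings.
rng = np.random.default_rng(2)
a = rng.normal(size=(8, 16))
loss_aligned = nt_xent(a, a + 0.01 * rng.normal(size=a.shape))
loss_random = nt_xent(a, rng.normal(size=a.shape))
```

Minimizing this loss pulls the two augmented views of each phonocardiogram together in embedding space while pushing apart views of different recordings, which is what makes the learned backbone useful for the downstream classification heads.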
RHALE: Robust and Heterogeneity-aware Accumulated Local Effects
Accumulated Local Effects (ALE) is a widely-used explainability method for
isolating the average effect of a feature on the output, because it handles
cases with correlated features well. However, it has two limitations. First, it
does not quantify the deviation of instance-level (local) effects from the
average (global) effect, known as heterogeneity. Second, for estimating the
average effect, it partitions the feature domain into user-defined, fixed-sized
bins, where different bin sizes may lead to inconsistent ALE estimations. To
address these limitations, we propose Robust and Heterogeneity-aware ALE
(RHALE). RHALE quantifies the heterogeneity by considering the standard
deviation of the local effects and automatically determines an optimal
variable-size bin-splitting. In this paper, we prove that to achieve an
unbiased approximation of the standard deviation of local effects within each
bin, bin splitting must follow a set of sufficient conditions. Based on these
conditions, we propose an algorithm that automatically determines the optimal
partitioning, balancing the estimation bias and variance. Through evaluations
on synthetic and real datasets, we demonstrate the superiority of RHALE
compared to other methods, including the advantages of automatic bin splitting,
especially in cases with correlated features.
Comment: Accepted at ECAI 2023 (European Conference on Artificial Intelligence).
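The heterogeneity notion can be illustrated with a minimal sketch that computes, per bin, the mean of the local effects (the ALE ingredient) and their standard deviation (the heterogeneity RHALE adds). The paper's automatic variable-size bin splitting is deliberately omitted here; bin edges are taken as given, and the toy model is an assumption:

```python
import numpy as np

def rhale_bins(x_j, local_effects, edges):
    """Per-bin mean effect and standard deviation (heterogeneity).

    `local_effects` are per-sample derivatives df/dx_j; the std within a bin
    measures how far instance-level effects deviate from the bin average.
    """
    n_bins = len(edges) - 1
    idx = np.clip(np.digitize(x_j, edges) - 1, 0, n_bins - 1)
    mu = np.array([local_effects[idx == b].mean() for b in range(n_bins)])
    sd = np.array([local_effects[idx == b].std() for b in range(n_bins)])
    return mu, sd

# Toy: f(x) = x0 * x1, so df/dx0 = x1 -> the *average* effect of x0 is ~0,
# but heterogeneity is large: local effects spread as widely as x1 itself.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(2000, 2))
mu, sd = rhale_bins(X[:, 0], X[:, 1], np.linspace(-1, 1, 11))
```

A near-zero mean with a large per-bin standard deviation is exactly the situation where reporting only the average effect (as plain ALE does) would be misleading, which motivates quantifying heterogeneity alongside it.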
VITALAS at TRECVID-2008
In this paper, we present our experiments for the TRECVID 2008 high-level feature extraction task. This is the first year of our participation in TRECVID, and our system adopts several popular approaches previously proposed by other workgroups. We propose two advanced low-level features: a new Gabor texture descriptor and the Compact-SIFT codeword histogram. Our system uses the well-known LIBSVM library to train the basic SVM classifiers. In the fusion step, several methods are employed, such as voting, SVM-based fusion, HCRF, and Bootstrap Average AdaBoost (BAAB).
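Of the fusion methods listed, voting is the simplest to make concrete: each per-concept classifier casts a binary vote and the fused decision is the majority. The sketch below shows only this variant with made-up scores; the SVM-based, HCRF, and BAAB fusion schemes are not reproduced here:

```python
import numpy as np

def late_fusion_vote(scores, threshold=0.5):
    """Majority-vote late fusion over per-classifier concept scores.

    `scores` has shape (n_classifiers, n_shots); each classifier votes for
    the concept when its score exceeds the threshold, and the fused decision
    is the majority of votes.
    """
    votes = scores > threshold
    return votes.sum(axis=0) > scores.shape[0] / 2

# Three classifiers scoring three shots for one high-level feature.
scores = np.array([[0.9, 0.2, 0.60],
                   [0.8, 0.4, 0.30],
                   [0.1, 0.7, 0.55]])
print(late_fusion_vote(scores))  # [ True False  True]
```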
Partially Oblivious Neural Network Inference
Oblivious inference is the task of outsourcing an ML model, such as a
neural network, without disclosing critical and sensitive information, such as
the model's parameters. One of the most prominent solutions for secure
oblivious inference is based on powerful cryptographic tools, such as
Homomorphic Encryption (HE) and/or Multi-Party Computation (MPC). Even though
implementations of oblivious inference schemes have improved impressively
over the last decade, there are still significant limitations on the ML
models that they can practically support, especially when the confidentiality
of both the ML model and the input data must be protected. In this paper, we
introduce the notion of partially oblivious inference. We empirically show that
for neural network models, like CNNs, some information leakage can be
acceptable. We therefore propose a novel trade-off between security and
efficiency. In our research, we investigate the impact of partial leakage of
the CNN model's weights on security and inference runtime performance. We
experimentally demonstrate that in a CIFAR-10 network we can leak up to
of the model's weights with practically no security impact, while the necessary
HE multiplications are performed four times faster.
Comment: P. Rizomiliotis, C. Diou, A. Triakosia, I. Kyrannas and K. Tserpes. Partially oblivious neural network inference. In Proceedings of the 19th International Conference on Security and Cryptography, SECRYPT (pp. 158-169), 2022.
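The trade-off can be caricatured with a small sketch: weights revealed in plaintext cost no homomorphic multiplications, so leaking a fraction of them shrinks the HE workload proportionally. The magnitude-based selection and the 75% leak fraction below are illustrative assumptions, not the paper's protocol; the resulting 4x count reduction simply mirrors the speed-up the abstract reports:

```python
import numpy as np

def split_weights(w, leak_fraction):
    """Partition weights into a public (plaintext) and private (encrypted) set.

    Purely illustrative: which weights to reveal, and the security analysis
    of doing so, are the paper's contribution and are not reproduced here.
    Operations involving public weights need no homomorphic multiplication.
    """
    k = int(leak_fraction * w.size)
    order = np.argsort(np.abs(w.ravel()))[::-1]   # e.g. leak largest-magnitude
    public = np.zeros(w.size, dtype=bool)
    public[order[:k]] = True
    return public.reshape(w.shape)

rng = np.random.default_rng(4)
w = rng.normal(size=(64, 64))                  # one hypothetical weight matrix
mask = split_weights(w, leak_fraction=0.75)
he_mults_full = w.size                         # fully oblivious: all encrypted
he_mults_partial = int((~mask).sum())          # only private weights cost HE mults
print(he_mults_full / he_mults_partial)        # prints 4.0
```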